intentional stance


When AI companions become witty: Can human brain recognize AI-generated irony?

Rao, Xiaohui, Wu, Hanlin, Cai, Zhenguang G.

arXiv.org Artificial Intelligence

As Large Language Models (LLMs) are increasingly deployed as social agents and trained to produce humor and irony, a question emerges: when encountering witty AI remarks, do people interpret these as intentional communication or mere computational output? This study investigates whether people adopt the intentional stance, attributing mental states to explain behavior, toward AI during irony comprehension. Irony provides an ideal paradigm because it requires distinguishing intentional contradictions from unintended errors through effortful semantic reanalysis. We compared behavioral and neural responses to ironic statements from AI versus human sources using established ERP components: the P200, reflecting early incongruity detection, and the P600, indexing the cognitive effort of reinterpreting incongruity as deliberate irony. Results demonstrate that people do not fully adopt the intentional stance toward AI-generated irony. Behaviorally, participants attributed incongruity to deliberate communication for both sources, though significantly less for AI than for humans, showing a greater tendency to interpret AI incongruities as computational errors. Neural data revealed attenuated P200 and P600 effects for AI-generated irony, suggesting reduced effortful detection and reanalysis consistent with diminished attribution of communicative intent. Notably, people who perceived AI as more sincere showed larger P200 and P600 effects for AI-generated irony, suggesting that intentional stance adoption is calibrated by specific mental models of artificial agents. These findings reveal that source attribution shapes neural processing of social-communicative phenomena. Despite current LLMs' linguistic sophistication, achieving genuine social agency requires more than linguistic competence; it also requires a shift in how humans perceive and attribute intentionality to artificial agents.


Perfect AI Mimicry and the Epistemology of Consciousness: A Solipsistic Dilemma

Li, Shurui

arXiv.org Artificial Intelligence

Rapid advances in artificial intelligence necessitate a re-examination of the epistemological foundations upon which we attribute consciousness. As AI systems increasingly mimic human behavior and interaction with high fidelity, the concept of a "perfect mimic" -- an entity empirically indistinguishable from a human through observation and interaction -- shifts from hypothetical to technologically plausible. This paper argues that such developments pose a fundamental challenge to the consistency of our mind-recognition practices. Consciousness attributions rely heavily, if not exclusively, on empirical evidence derived from behavior and interaction. If a perfect mimic provides evidence identical to that of humans, any refusal to grant it equivalent epistemic status must invoke inaccessible factors, such as qualia, substrate requirements, or origin. Selectively invoking such factors risks a debilitating dilemma: either we undermine the rational basis for attributing consciousness to others (epistemological solipsism), or we accept inconsistent reasoning. I contend that epistemic consistency demands we ascribe the same status to empirically indistinguishable entities, regardless of metaphysical assumptions. The perfect mimic thus acts as an epistemic mirror, forcing critical reflection on the assumptions underlying intersubjective recognition in light of advancing AI. This analysis carries significant implications for theories of consciousness and ethical frameworks concerning artificial agents.


People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior

Gyevnar, Balint, Droop, Stephanie, Quillien, Tadeg, Cohen, Shay B., Bramley, Neil R., Lucas, Christopher G., Albrecht, Stefano V.

arXiv.org Artificial Intelligence

Cognitive science can help us understand which explanations people might expect, and in which format they frame them, whether causal, counterfactual, or teleological (i.e., purpose-oriented). Understanding the relevance of these concepts is crucial for building good explainable AI (XAI) that offers recourse and actionability. Focusing on autonomous driving, a complex decision-making domain, we report empirical data from two surveys on (i) how people explain the behavior of autonomous vehicles in 14 unique scenarios (N1=54), and (ii) how they perceive these explanations in terms of complexity, quality, and trustworthiness (N2=356). Participants deemed teleological explanations to be of significantly higher quality than counterfactual ones, with perceived teleology being the best predictor of perceived quality and trustworthiness. Neither perceived teleology nor perceived quality was affected by whether the car was an autonomous vehicle or driven by a person. This indicates that people use teleology to evaluate information about not just other people but also autonomous vehicles. Taken together, our findings highlight the importance of explanations that are framed in terms of purpose rather than just, as is standard in XAI, the causal mechanisms involved. We release the 14 scenarios and more than 1,300 elicited explanations publicly as the Human Explanations for Autonomous Driving Decisions (HEADD) dataset.


Talking about Large Language Models

Communications of the ACM

Interacting with a contemporary LLM-based conversational agent can create an illusion of being in the presence of a thinking creature. Yet, by their very nature, such systems are fundamentally not like us.


Talking About Large Language Models

Shanahan, Murray

arXiv.org Artificial Intelligence

Thanks to rapid progress in artificial intelligence, we have entered an era when technology and philosophy intersect in interesting ways. Sitting squarely at the centre of this intersection are large language models (LLMs). The more adept LLMs become at mimicking human language, the more vulnerable we become to anthropomorphism, to seeing the systems in which they are embedded as more human-like than they really are. This trend is amplified by the natural tendency to use philosophically loaded terms, such as "knows", "believes", and "thinks", when describing these systems. To mitigate this trend, this paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work. The hope is that increased scientific precision will encourage more philosophical nuance in the discourse around artificial intelligence, both within the field and in the public sphere.


Taking the Intentional Stance Seriously, or "Intending" to Improve Cognitive Systems

Bridewell, Will

arXiv.org Artificial Intelligence

Finding claims that researchers have made considerable progress in artificial intelligence over the last several decades is easy. However, our everyday interactions with cognitive systems (e.g., Siri, Alexa, DALL-E) quickly move from intriguing to frustrating. One cause of those frustrations rests in a mismatch between the expectations we have due to our inherent, folk-psychological theories and the real limitations we experience with existing computer programs. The software does not understand that people have goals, beliefs about how to achieve those goals, and intentions to act accordingly. One way to align cognitive systems with our expectations is to imbue them with mental states that mirror those we use to predict and explain human behavior. This paper discusses these concerns and illustrates the challenge of following this route by analyzing the mental state 'intention.' That analysis is joined with high-level methodological suggestions that support progress in this endeavor.


Peirce's Semiotics and General Intelligence

#artificialintelligence

There is a natural evolution from the ideas that deep learning has empirically revealed toward a theory of general intelligence. A common criticism of deep learning is its lack of good theory. Deep learning is like the supercolliders in high-energy physics: it reveals the inner behavior of an artificial intuitive process and shows us patterns of what does work. To build up that theory, we must walk back through the ideas of past thinkers, thinkers who never saw this empirical evidence. What would they have concluded about their ideas had they been exposed to the evidence from deep learning?


Robot Mindreading and the Problem of Trust

Páez, Andrés

arXiv.org Artificial Intelligence

This paper raises three questions regarding the attribution of beliefs, desires, and intentions to robots. The first one is whether humans in fact engage in robot mindreading. If they do, this raises a second question: does robot mindreading foster trust towards robots? Both of these questions are empirical, and I show that the available evidence is insufficient to answer them. Now, if we assume that the answer to both questions is affirmative, a third and more important question arises: should developers and engineers promote robot mindreading in view of their stated goal of enhancing transparency? My worry here is that by attempting to make robots more mind-readable, they are abandoning the project of understanding automatic decision processes. Features that enhance mind-readability are prone to make the factors that determine automatic decisions even more opaque than they already are. And current strategies to eliminate opacity do not enhance mind-readability. The last part of the paper discusses different ways to analyze this apparent trade-off and suggests that a possible solution must adopt tolerable degrees of opacity that depend on pragmatic factors connected to the level of trust required for the intended uses of the robot.


How "intelligent" can Artificial Intelligence get?

#artificialintelligence

This post is the second in a series of three posts, each of which discusses the fundamental concepts of Artificial Intelligence. In our first post we discussed AI definitions, helping our readers to understand the basic concepts behind AI and giving them the tools required to sift through the many AI articles out there and form their own opinion. In this second post, we will discuss several notions which are important for understanding the limits of AI. When we speak about how far AI can go, there are two "philosophies": strong AI and weak AI. The most commonly followed philosophy is that of weak AI, which means that machines can manifest certain intelligent behavior to solve specific (hard) tasks, but that they will never equal the human mind.


Is anyone in AI/Machine Learning community working on realizing Daniel Dennett's Stances artificially? • /r/artificial

#artificialintelligence

The core idea is that, when understanding, explaining and/or predicting the behavior of an object, we can choose to view it at varying levels of abstraction. The more concrete the level, the more accurate in principle our predictions are; the more abstract, the greater the computational power we gain by zooming out and skipping over the irrelevant details. Dennett defines three levels of abstraction, attained by adopting one of three entirely different "stances", or intellectual strategies: the physical stance, the design stance, and the intentional stance. The most concrete is the physical stance, the domain of physics and chemistry, which makes predictions from knowledge of the physical constitution of the system and the physical laws that govern its operation; thus, given a particular set of physical laws and initial conditions, and a particular configuration, a specific future state is predicted (this could also be called the "structure stance").[15] At this level, we are concerned with such things as mass, energy, velocity, and chemical composition. When we predict where a ball is going to land based on its current trajectory, we are taking the physical stance.
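To make the three stances concrete in code, here is a minimal toy sketch of the same idea; the thermostat example, class, and function names are illustrative assumptions of mine, not something proposed in the post or by Dennett:

```python
# Toy illustration: the same simple device predicted under Dennett's three stances.
# Each predictor uses less detail about the device's internals than the one before it.

from dataclasses import dataclass


@dataclass
class Thermostat:
    setpoint: float          # design-level fact: the temperature it is built to maintain
    room_temp: float         # physical-level fact: the current sensor reading
    wants_comfort: bool = True  # intentional-level gloss: "it wants the room comfortable"


def physical_stance(t: Thermostat) -> str:
    # Predict from physical state and causal regularities: the switching element
    # closes the heater circuit when the reading falls below the threshold.
    return "heater circuit closes" if t.room_temp < t.setpoint else "heater circuit stays open"


def design_stance(t: Thermostat) -> str:
    # Predict from what the artifact is designed to do, ignoring how it does it.
    return "it will act to restore the setpoint" if t.room_temp != t.setpoint else "it will hold steady"


def intentional_stance(t: Thermostat) -> str:
    # Predict by ascribing beliefs and desires: cheapest to compute, least detailed.
    if t.wants_comfort and t.room_temp < t.setpoint:
        return "it 'believes' the room is cold and 'wants' to warm it"
    return "it is 'satisfied' with the room temperature"


if __name__ == "__main__":
    t = Thermostat(setpoint=21.0, room_temp=18.5)
    for stance in (physical_stance, design_stance, intentional_stance):
        print(f"{stance.__name__}: {stance(t)}")
```

The point of the toy is only that all three predictors converge on the same behavior while differing in how much of the device's internals they must know about, which is the accuracy-versus-abstraction trade-off the passage describes.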